5 research outputs found

    Multi-agent location system in wireless networks

    In this paper we propose a flexible multi-agent architecture, together with a methodology for indoor location, that allows us to locate any mobile station (MS), such as a laptop, smartphone, tablet or robotic system, in an indoor environment using wireless technology. Our technology complements GPS, as it allows us to locate a mobile system in a specific room on a specific floor using Wi-Fi networks. The idea is that any MS will have at its disposal an agent, known as a Fuzzy Location Software Agent (FLSA), with minimal processing capacity, which collects the power received at different access points distributed around the floor and establishes its location on a plan of the floor of the building. To do so, it communicates with a Fuzzy Location Manager Software Agent (FLMSA). The FLMSAs are local agents that form part of the management infrastructure of the organization's Wi-Fi network. The FLMSA implements a location estimation methodology divided into three phases (measurement, calibration and estimation) for locating mobile stations. Our solution is a fingerprint-based positioning system that overcomes the problem of the relative effect of doors and walls on signal strength and is independent of the network device manufacturer. In the measurement phase, our system collects received signal strength indicator (RSSI) measurements from multiple access points. In the calibration phase, our system uses these measurements in a normalization process to create a radio map, a database of RSS patterns. Unlike traditional radio-map-based methods, our methodology normalizes RSS measurements collected at different locations on a floor. In the third phase, we use fuzzy controllers to locate an MS on the plan of the floor of a building. Experimental results demonstrate the accuracy of the proposed method. From these results it is clear that the system is highly likely to be able to locate an MS in a room or an adjacent room.
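The measurement and calibration phases described above can be sketched in a few lines. This is a minimal illustration, not the paper's method: the radio map, room names and RSSI values are invented, and a simple nearest-neighbor match stands in for the fuzzy controllers of the estimation phase.

```python
import math

# Hypothetical radio map built in the calibration phase:
# location label -> normalized RSSI pattern (one value per access point).
radio_map = {
    "room_101": [0.9, 0.4, 0.1],
    "room_102": [0.5, 0.8, 0.3],
    "hallway":  [0.3, 0.5, 0.7],
}

def normalize(rssi):
    """Scale an RSSI vector to unit sum, mimicking the normalization step."""
    total = sum(rssi)
    return [v / total for v in rssi]

def locate(rssi_sample):
    """Return the radio-map location closest (Euclidean) to the sample."""
    sample = normalize(rssi_sample)
    return min(
        radio_map,
        key=lambda loc: math.dist(radio_map[loc], sample),
    )

print(locate([45, 20, 5]))  # strongest signal at AP 1 -> "room_101"
```

Normalizing both the map patterns and the live sample is what makes the comparison robust to device-dependent absolute signal levels, which is the motivation the abstract gives for the calibration phase.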

    Minimal Decision Rules Based on the A Priori Algorithm

    Based on rough set theory, many algorithms for extracting rules from data have been proposed. Decision rules can be obtained directly from a database, but some condition values in a rule produced this way may be unnecessary. Such values can be eliminated to create a more comprehensible (minimal) rule. Most of the algorithms proposed to calculate minimal rules are based on rough set theory or machine learning. In our approach, in a post-processing stage, we apply the Apriori algorithm to reduce the decision rules obtained through rough sets. The set of dependencies thus obtained helps us discover irrelevant attribute values
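The core of the Apriori step is enumerating frequent sets of attribute-value items, whose co-occurrence reveals dependencies between condition and decision values. A minimal sketch follows; the decision table, attribute names and support threshold are invented for illustration and the candidate generation is simplified to pairwise unions.

```python
from collections import Counter

# Hypothetical decision table: each row is a set of attribute=value items.
rows = [
    {"outlook=sunny", "humid=high", "play=no"},
    {"outlook=sunny", "humid=normal", "play=yes"},
    {"outlook=rain", "humid=high", "play=yes"},
    {"outlook=rain", "humid=normal", "play=yes"},
]

def frequent_itemsets(rows, min_support, max_size=2):
    """Apriori-style enumeration of itemsets meeting min_support."""
    counts = Counter(item for row in rows for item in row)
    level = {frozenset([i]) for i, c in counts.items() if c >= min_support}
    frequent = {s: counts[next(iter(s))] for s in level}
    size = 2
    while level and size <= max_size:
        # Candidates are unions of frequent sets from the previous level.
        candidates = {a | b for a in level for b in level if len(a | b) == size}
        level = set()
        for cand in candidates:
            support = sum(1 for row in rows if cand <= row)
            if support >= min_support:
                level.add(cand)
                frequent[cand] = support
        size += 1
    return frequent

freq = frequent_itemsets(rows, min_support=2)
# {"humid=normal", "play=yes"} comes out frequent: whenever humidity is
# normal the decision is yes, so other conditions in such rules may be
# redundant and candidates for elimination.
```

In the post-processing stage the abstract describes, dependencies like this one would flag condition values that can be dropped from a decision rule without changing its conclusion.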

    Modelo matemático paramétrico de estimación para proyectos de data mining

    Data Mining emerged as a research field in the late 1980s with the aim of solving the problem of knowledge discovery in databases. The knowledge acquired from databases is used to support decision-making processes in companies. In this sense, the development of Data Mining techniques served as support for CRM projects. Since then, many projects of this kind have been carried out in all kinds of organizations. However, even today these projects are undertaken without a clear estimate of any kind of resources. As a consequence, while many projects have been completed successfully, there are numerous reports of Data Mining projects failing for lack of estimation at their outset. Twenty years of Data Mining research have produced a large number of bibliographic references on discovery algorithms; however, references that address the problem of applying Data Mining in a company from a Software Engineering perspective are scarce. In fact, the only approach is the definition of the CRISP-DM standard process model. Both the process-model standards for software development and the one proposed in CRISP-DM include similar processes and tasks concerning the generation of the budget and the project plan. In software development, estimating the duration and effort a project will take relies on estimation methods such as SLIM, SEER-SEM, PRICE-S and COCOMO, among others. When it comes to estimating a Data Mining project, these methods are not appropriate, since their main input is the size of the software to be developed, and Data Mining projects do not involve developing software.
Although estimation methods exist for certain kinds of Data Mining problems in advanced phases of a project, there is no generic estimation method whose outputs, effort and time, can serve as a starting point for drawing up the project plan and budget. This is the central motivation of this doctoral thesis, which proposes a parametric estimation method for Data Mining projects. To that end, the thesis defines the main cost drivers and, based on real projects and using linear regression, establishes the model equation
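The regression step at the heart of such a parametric model can be sketched with ordinary least squares. This is only an illustration of the technique, not the thesis's actual model: the project data and the single cost driver are invented, whereas the thesis calibrates its equation on real projects with several drivers.

```python
# Hypothetical historical projects: (cost-driver value, effort in
# person-hours). Both columns are made-up illustrative data.
projects = [(2, 120), (4, 210), (6, 320), (8, 400)]

def fit_linear(samples):
    """Ordinary least squares for effort = a + b * driver."""
    n = len(samples)
    sx = sum(x for x, _ in samples)
    sy = sum(y for _, y in samples)
    sxx = sum(x * x for x, _ in samples)
    sxy = sum(x * y for x, y in samples)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

a, b = fit_linear(projects)
estimate = a + b * 5  # predicted effort for a new project with driver = 5
```

Once the coefficients are fitted from historical projects, estimating a new project reduces to plugging its cost-driver values into the equation, which is what makes the model usable at the planning and budgeting stage.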

    A survey of data mining and knowledge discovery process models and methodologies

    Up to now, many data mining and knowledge discovery methodologies and process models have been developed, with varying degrees of success. In this paper, we describe the data mining and knowledge discovery methodologies and process models most used in industrial and academic projects and most cited in the scientific literature, providing an overview of their evolution along data mining and knowledge discovery history and setting down the state of the art in this topic. For every approach, we provide a brief description of the proposed knowledge discovery in databases (KDD) process, discussing its special features and the outstanding advantages and disadvantages of each approach. In addition, a global comparison of all the presented data mining approaches is provided, focusing on the different steps and tasks in which every approach interprets the whole KDD process. As a result of the comparison, we propose a new data mining and knowledge discovery process, named the refined data mining process, for developing any kind of data mining and knowledge discovery project. The refined data mining process is built on specific steps taken from the analyzed approaches.
    1.257 JCR (2010) Q3, 60/108 Computer Science, Artificial Intelligence

    An engineering approach to data mining projects

    Both the number and complexity of Data Mining projects have increased in recent years. Unfortunately, there is currently no formal process model for this kind of project, and existing approaches are neither adequate nor complete enough. In some sense, the present situation is comparable to the one in software development that led to the 'software crisis' of the late 1960s. Software Engineering matured on the basis of process models and methodologies, and Data Mining's evolution is running parallel to that of Software Engineering. The research work described in this paper proposes a process model for Data Mining projects based on the study of current Software Engineering process models (IEEE Std 1074 and ISO 12207) and the most used Data Mining methodology, CRISP-DM (considered a de facto standard), as basic references.
    No funding. 0.275 SJR (2007) Q2, 92/281 Computer Science (miscellaneous); Q4, 82/123 Theoretical Computer Science